联合分析是一种流行的实验设计,用于测量多维偏好。研究人员研究了在控制其他相关因素的同时如何影响决策。当前,存在两种方法学方法来分析联合实验的数据。第一个重点是估计每个因素的平均边际效应,同时平均其他因素。尽管这允许基于直接设计的估计,但结果严重取决于其他因素的分布以及相互作用效应的汇总方式。一种基于模型的替代方法可以计算各种兴趣,但要求研究人员正确指定模型,这是与许多因素和可能的相互作用的联合分析的挑战性任务。此外,在合并相互作用时,常用的逻辑回归即使具有适度的因素,统计特性也很差。我们提出了一种基于条件随机测试的新假设检验方法,以回答联合分析的最基本问题:考虑到其他因素,感兴趣的因素是否重要?我们的方法仅基于因素的随机化,因此没有假设。但是,它允许研究人员使用任何测试统计量,包括基于复杂的机器学习算法的统计量。结果,我们能够结合现有的基于设计和基于模型的方法的优势。我们通过对移民偏好和政治候选评估的联合分析来说明拟议的方法。我们还扩展了提出的方法来测试联合分析中常用的规律性假设。可以使用开源软件包来实施建议的方法。
translated by 谷歌翻译
在这项工作中,我们分析了一种高效的采样算法,用于通用可达性分析,这仍然是一种令人难度的挑战性问题,其应用范围从神经网络验证到动态系统的安全分析。通过采样输入,评估其在真正可到达的集合中的图像,并将其$ \ epsilon $ -padded凸壳作为集合估计器,该算法适用于一般问题设置,易于实现。我们主要贡献是使用随机集理论的渐近和有限样本精度保证的推导。该分析通知算法设计以获得$ \ epsilon $-close达到的近似值,具有很高的概率,提供了可达性问题最具挑战性的洞察力,并激励了该技术的安全关键应用。在神经网络验证任务上,我们表明这种方法比现有工作更准确,明显更快。我们的分析知情,我们还设计了一种强大的模型预测控制器,我们在硬件实验中展示。
translated by 谷歌翻译
强盗算法越来越多地用于现实世界的连续决策问题。与之相关的是能够使用所产生的数据集来支持科学问题的增加,如:一种类型的广告导致更多购买?哪些背景是移动健康干预有效?然而,当与带有强盗算法收集的数据一起使用时,经典统计方法无法提供有效的置信区间。最近已经开发了用于简单模型的替代方法(例如,手段的比较)。然而,使用使用(上下文)强盗算法收集的数据的更复杂模型,缺乏对统计推断进行统计推理的一般方法;例如,当前方法不能用于逻辑回归模型中的参数的有效推断,以获得二进制奖励。在这项工作中,我们开发理论证明使用M估算器的使用 - 这包括基于经验风险最小化的估计,以及最大可能性 - 与自适应算法收集的数据,包括(上下文)强盗算法。具体地,我们表明,用特定自适应重量修改的M估算器可用于构建用于各种推理目标的渐近有效的置信区。
translated by 谷歌翻译
In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile riskconstrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.
translated by 谷歌翻译
In the era of noisy intermediate scale quantum devices, variational quantum circuits (VQCs) are currently one of the main strategies for building quantum machine learning models. These models are made up of a quantum part and a classical part. The quantum part is given by a parametrization $U$, which, in general, is obtained from the product of different quantum gates. By its turn, the classical part corresponds to an optimizer that updates the parameters of $U$ in order to minimize a cost function $C$. However, despite the many applications of VQCs, there are still questions to be answered, such as for example: What is the best sequence of gates to be used? How to optimize their parameters? Which cost function to use? How the architecture of the quantum chips influences the final results? In this article, we focus on answering the last question. We will show that, in general, the cost function will tend to a typical average value the closer the parameterization used is from a $2$-design. Therefore, the closer this parameterization is to a $2$-design, the less the result of the quantum neural network model will depend on its parametrization. As a consequence, we can use the own architecture of the quantum chips to defined the VQC parametrization, avoiding the use of additional swap gates and thus diminishing the VQC depth and the associated errors.
translated by 谷歌翻译
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attributes scales in a way that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is not generally done carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performances of 20 classification algorithms among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance and provide insights into its applicability on different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
translated by 谷歌翻译
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
translated by 谷歌翻译
Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can ``leak'' onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
translated by 谷歌翻译
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
translated by 谷歌翻译
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
translated by 谷歌翻译